
    List decoding of noisy Reed-Muller-like codes

    First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are two fundamental error-correcting codes which arise in communication as well as in probabilistically-checkable proofs and learning. In this paper, we take the first steps toward extending the quick randomized decoding tools of RM(1) into the realm of quadratic binary and, equivalently, Z_4 codes. Our main algorithmic result is an extension of the RM(1) techniques from the Goldreich-Levin and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and RM(2). That is, given a signal s of length N, we find a list that is a superset of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times the norm of s, in time polynomial in k and log(N). We also give a new and simple formulation of a known Kerdock code as a subcode of the Hankel code. As a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm for finding a sparse Kerdock approximation. That is, for k small compared with sqrt(N) and for epsilon > 0, we find, in time polynomial in (k log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at most the factor (1 + epsilon + O(k^2/sqrt(N))) times that of the best such approximation.
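    The output guarantee of the list decoder is concrete enough to illustrate directly. The sketch below (a brute-force toy, not the paper's poly(k, log N)-time algorithm, which never enumerates codewords; the codeword set is made up) simply returns every codeword whose dot product with the signal clears the (1/sqrt(k))·||s|| threshold:

    ```python
    import math

    def list_decode(s, codewords, k):
        """Toy 'list decoder': return every codeword phi whose dot product
        with signal s is at least ||s||_2 / sqrt(k).  The paper's algorithms
        produce a superset of this list in time poly(k, log N); this
        exhaustive version only illustrates the output guarantee."""
        norm_s = math.sqrt(sum(t * t for t in s))
        threshold = norm_s / math.sqrt(k)
        return [phi for phi in codewords
                if sum(a * b for a, b in zip(phi, s)) >= threshold]

    # Hypothetical +/-1 codewords for illustration only.
    s = [1.0, 1.0, 1.0, 1.0]
    cw = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1]]
    print(list_decode(s, cw, 4))  # → [[1, 1, 1, 1]]  (only this one clears ||s||/2)
    ```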

    Approximate Sparse Recovery: Optimizing Time and Measurements

    An approximate sparse recovery system consists of parameters k, N, an m-by-N measurement matrix Φ, and a decoding algorithm D. Given a vector x, the system approximates x by x̂ = D(Φx), which must satisfy ‖x̂ − x‖_2 ≤ C‖x − x_k‖_2, where x_k denotes the optimal k-term approximation to x. For each vector x, the system must succeed with probability at least 3/4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm D. In this paper, we give a system with m = O(k log(N/k)) measurements, matching a lower bound up to a constant factor, and decoding time O(k log^c N), matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(N) factors.
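    The ℓ2/ℓ2 guarantee above is easy to make concrete. The plain-Python sketch below (helper names are my own; it implements only the error bound, not the paper's measurement matrix or decoder) computes the optimal k-term approximation x_k by keeping the k largest-magnitude entries, then checks whether a candidate output x̂ satisfies ‖x̂ − x‖_2 ≤ C‖x − x_k‖_2:

    ```python
    import math

    def best_k_term(x, k):
        """Optimal k-term approximation x_k: keep the k largest-magnitude entries."""
        keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k])
        return [x[i] if i in keep else 0.0 for i in range(len(x))]

    def l2(v):
        return math.sqrt(sum(t * t for t in v))

    def satisfies_guarantee(x, x_hat, k, C):
        """Check the recovery bound ||x_hat - x||_2 <= C * ||x - x_k||_2."""
        xk = best_k_term(x, k)
        err = l2([a - b for a, b in zip(x_hat, x)])
        opt = l2([a - b for a, b in zip(x, xk)])
        return err <= C * opt

    x = [10.0, -7.0, 0.5, 0.3, -0.2, 0.1]
    x_hat = [10.0, -7.0, 0.0, 0.0, 0.0, 0.0]          # a 2-sparse estimate
    print(satisfies_guarantee(x, x_hat, k=2, C=2.0))  # → True (x_hat equals x_2 here)
    ```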

    Using learning design as a framework for supporting the design and reuse of OER

    The paper will argue that adopting a learning design methodology may provide a vehicle for enabling better design and reuse of Open Educational Resources (OERs). It will describe a learning design methodology, which is being developed and implemented at the Open University in the UK. The aim is to develop a 'pick and mix' learning design toolbox of different resources and tools to help designers/teachers make informed decisions about creating new or adapting existing learning activities. The methodology is applicable for designers/teachers designing in a traditional context, such as the creation of materials as part of a formal curriculum, but also has value for those wanting to create OERs or adapt and repurpose existing OERs. With the increasing range of OERs now available through initiatives as part of the Open Courseware movement, we believe that methodologies such as the one we describe in this paper, which can help guide reuse and adaptation, will become increasingly important and arguably are an important aspect of ensuring longer-term sustainability and uptake of OERs. We adopt an empirically based approach to understanding and representing the design process. This includes a range of evaluation studies (capturing of case studies, interviews with designers/teachers, in-depth course evaluation and focus groups/workshops), which are helping to develop our understanding of how designers/teachers go about creating new learning activities. Alongside this we are collating an extensive set of tools and resources to support the design process, as well as developing a new Learning Design tool that helps teachers articulate and represent their design ideas. The paper will describe how we have adapted a mind mapping and argumentation tool, Compendium, for this purpose and how it is being used to help designers and teachers create and share learning activities.
It will consider how initial evaluation of the use of the tool for learning design has been positive; users report that the tool is easy to use and helps them organise and articulate their learning designs. Importantly, the tool also enables them to share and discuss their thinking about the design process. However, it is also clear that visualising the design process is only one aspect of design, which is complex and multi-faceted.

    A comparison of the quality of image acquisition between two different sidestream dark field video-microscopes

    Sidestream dark field (SDF) imaging enables direct visualisation of the microvasculature, from which quantification of key variables is possible. The new MicroScan USB3 (MS-U) video-microscope is a hand-held SDF device that has undergone significant technical upgrades from its predecessor, the MicroScan Analogue (MS-A). The MS-U claims superior quality of sublingual microcirculatory image acquisition over the MS-A; however, this has yet to be robustly confirmed. In this manuscript, we therefore compare the quality of image acquisition between these two devices. The microcirculation of healthy volunteers was visualised to generate thirty video images for each device. Two independent raters, blinded to the device type, graded the quality of the images according to the six different traits in the Microcirculation Image Quality Score (MIQS) system. Chi-squared tests and Kappa statistics were used to compare not only the distribution of scores between the devices, but also agreement between raters. MS-U showed superior image quality over MS-A in three out of six MIQS traits; MS-U had significantly more optimal images by illumination (MS-U 95% optimal images, MS-A 70% optimal images (p-value 0.003)), by focus (MS-U 70% optimal images, MS-A 35% optimal images (p-value 0.002)) and by pressure (MS-U 72.5% optimal images, MS-A 47.5% optimal images (p-value 0.02)). For each trait, there was at least 85% agreement between the raters, and all the scores for each trait were independent of the rater (all p-values > 0.05). These results show that the new MS-U provides a superior quality of sublingual microcirculatory image acquisition when compared to the old MS-A.
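    The study reports Kappa statistics for agreement between the two blinded raters on categorical image traits. As a rough illustration of that agreement measure (this is not the authors' code, and the labels below are invented), Cohen's kappa in plain Python is just observed agreement corrected for chance agreement:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters who
        each assign one categorical label (e.g. 'optimal'/'suboptimal') per image."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        ca, cb = Counter(rater_a), Counter(rater_b)
        # Expected agreement if both raters labelled images independently at random
        # according to their own marginal label frequencies.
        expected = sum((ca[lbl] / n) * (cb[lbl] / n) for lbl in set(ca) | set(cb))
        return (observed - expected) / (1 - expected)

    # Hypothetical gradings of six images by two raters.
    a = ["opt", "opt", "sub", "opt", "sub", "opt"]
    b = ["opt", "opt", "sub", "sub", "sub", "opt"]
    print(round(cohens_kappa(a, b), 3))  # → 0.667
    ```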

    Error Threshold for Color Codes and Random 3-Body Ising Models

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random 3-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p_c = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
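    The phase diagram is obtained by standard Monte Carlo sampling of an Ising model. As a minimal illustration of that machinery only (a Metropolis update for an ordinary two-body Ising chain, not the random 3-body model mapped to in the paper), one step looks like this:

    ```python
    import math
    import random

    def energy(spins, J=1.0):
        """Nearest-neighbour Ising energy E = -J * sum_i s_i s_{i+1} (periodic chain)."""
        n = len(spins)
        return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

    def metropolis_step(spins, T, J=1.0, rng=random):
        """One Metropolis update: propose flipping a random spin and accept
        with probability min(1, exp(-dE / T))."""
        n = len(spins)
        i = rng.randrange(n)
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        dE = 2 * J * spins[i] * (left + right)  # energy change from flipping spin i
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
        return spins

    rng = random.Random(42)
    spins = [rng.choice([-1, 1]) for _ in range(16)]
    for _ in range(1000):
        metropolis_step(spins, T=0.5, rng=rng)  # low T drives spins toward alignment
    ```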

    Synthesis of a Series of Diaminoindoles

    A selection of 3,4-diaminoindoles were required for a recent drug discovery project. To this end, a 10-step synthesis was developed from 4-nitroindole. This synthesis was subsequently adapted and used to synthesize 3,5-; 3,6-; and 3,7-diaminoindoles from the corresponding 5-, 6-, or 7-nitroindole. These novel intermediates feature orthogonal protecting groups that allow them to be further diversified. This is the first reported synthesis of these types of compounds.

    A method for exploratory repeated-measures analysis applied to a breast-cancer screening study

    When a model may be fitted separately to each individual statistical unit, inspection of the point estimates may help the statistician to understand between-individual variability and to identify possible relationships. However, some information will be lost in such an approach because estimation uncertainty is disregarded. We present a comparative method for exploratory repeated-measures analysis to complement the point estimates; the method was motivated by, and is demonstrated on, data from the CADET II breast-cancer screening study. The approach helped to flag up some unusual reader behavior, to assess differences in performance, and to identify potential random-effects models for further analysis. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/11-AOAS481) by the Institute of Mathematical Statistics (http://www.imstat.org).
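    The starting point described above is fitting the same simple model to each unit and laying the point estimates side by side. A minimal stdlib-only sketch (unit names and data are invented for illustration) fits an ordinary least-squares line per unit and collects the slopes:

    ```python
    def ols_fit(xs, ys):
        """Least-squares fit y ≈ a + b*x for one statistical unit; returns (a, b)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        b = sxy / sxx
        return my - b * mx, b

    # One fit per individual unit; comparing the per-unit slopes is the kind of
    # exploratory step the paper complements by accounting for estimation uncertainty.
    units = {
        "reader_1": ([0, 1, 2, 3], [1.0, 1.9, 3.1, 4.0]),  # roughly increasing
        "reader_2": ([0, 1, 2, 3], [2.0, 2.0, 2.1, 1.9]),  # roughly flat
    }
    slopes = {name: ols_fit(x, y)[1] for name, (x, y) in units.items()}
    print({name: round(b, 3) for name, b in slopes.items()})
    ```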

    Object Repetition Leads to Local Increases in the Temporal Coordination of Neural Responses

    Experience with visual objects leads to later improvements in identification speed and accuracy (“repetition priming”), but generally leads to reductions in neural activity in single-cell recording studies in animals and fMRI studies in humans. Here we use event-related, source-localized MEG (ER-SAM) to evaluate the possibility that neural activity changes related to priming in occipital, temporal, and prefrontal cortex correspond to more temporally coordinated and synchronized activity, reflected in local increases in the amplitude of low-frequency activity fluctuations (i.e., evoked power) that are time-locked to stimulus onset. Subjects (N = 17) identified pictures of objects that were either novel or repeated during the session. Tests in two separate low-frequency bands (theta/alpha: 5–15 Hz; beta: 15–35 Hz) revealed increases in evoked power (5–15 Hz) for repeated stimuli in the right fusiform gyrus, with the earliest significant increases observed 100–200 ms after stimulus onset. Increases with stimulus repetition were also observed in striate/extrastriate cortex (15–35 Hz) by 200–300 ms post-stimulus, along with a trend for a similar pattern in right lateral prefrontal cortex (5–15 Hz). Our results suggest that experience-dependent reductions in neural activity may bring about improved behavioral identification through more coordinated, synchronized activity at low frequencies, constituting a mechanism for more efficient neural processing with experience.

    Changes in labial capillary density on ascent to and descent from high altitude.

    Present knowledge of how the microcirculation is altered by prolonged exposure to hypoxia at high altitude is incomplete, and modification of existing analytical techniques may improve our knowledge considerably. We set out to use a novel simplified method of measuring in vivo capillary density during an expedition to high altitude using a CytoCam incident dark field imaging video-microscope. The simplified method of data capture involved recording one-second images of the mucosal surface of the inner lip to reveal data about microvasculature density in ten individuals. This was done on ascent to, and descent from, high altitude. Analysis was conducted offline by two independent investigators blinded to the participant identity, testing conditions and the imaging site. Additionally, we monitored haemoglobin concentration and haematocrit data to see if we could support or refute mechanisms of altered density relating to vessel recruitment. Repeated sets of paired values were compared using Kruskal-Wallis analysis of variance tests, whilst comparisons of values between sites were made using the related-samples Wilcoxon signed-rank test. Correlation between different variables was performed using Spearman's rank correlation coefficient, and concordance between analysing investigators using the intra-class correlation coefficient. There was a significant increase in capillary density from London on ascent to high altitude; median capillaries per field of view area increased from 22.8 to 25.3 (p=0.021). There was a further increase in vessel density during the six weeks spent at altitude (25.3 to 32.5, p=0.017). Moreover, vessel density remained high on descent to Kathmandu (31.0 capillaries per field of view area), despite a significant decrease in haemoglobin concentration and haematocrit.
Using a simplified technique, we have demonstrated an increase in capillary density on early and sustained exposure to hypobaric hypoxia at high altitude, and that this remains elevated on descent to normoxia. The technique is simple, reliable and reproducible.
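    Among the statistics used above, Spearman's rank correlation is the simplest to show end to end: it is just the Pearson correlation computed on rank vectors (with tied values given their average rank). A stdlib-only sketch (not the authors' analysis code; the data are invented):

    ```python
    def ranks(values):
        """1-based ranks with ties averaged, as used by Spearman's rho."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        out = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1                      # extend the group of tied values
            avg = (i + j) / 2 + 1           # average rank for the tie group
            for t in range(i, j + 1):
                out[order[t]] = avg
            i = j + 1
        return out

    def spearman_rho(xs, ys):
        """Spearman's rho: Pearson correlation of the two rank vectors."""
        rx, ry = ranks(xs), ranks(ys)
        n = len(rx)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        vx = sum((a - mx) ** 2 for a in rx)
        vy = sum((b - my) ** 2 for b in ry)
        return cov / (vx * vy) ** 0.5

    # Hypothetical paired readings, e.g. capillary density vs. haematocrit.
    print(round(spearman_rho([22.8, 25.3, 32.5, 31.0], [45, 48, 52, 50]), 3))  # → 1.0
    ```

    Because the statistic depends only on ranks, any strictly increasing relationship gives rho = 1, which is why it suits non-normally distributed physiological measurements like these.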